beauty filter
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Switzerland > Fribourg > Fribourg (0.04)
- Europe > Spain (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur (0.04)
- Research Report > Experimental Study (0.46)
- Research Report > New Finding (0.46)
- Information Technology > Services (0.68)
- Law (0.68)
- Government (0.68)
- Health & Medicine > Therapeutic Area (0.46)
Scientists reveal the common dating app mistake that could make potential dates think you're stupid
When it comes to online dating, it may be tempting to apply a beauty filter to bag yourself a date. But be warned, ladies – this can make you appear less intelligent, according to a study. An online study involving more than 2,700 participants asked them to rate images of 462 individuals. These images consisted of original faces and their corresponding 'beautified' versions. None of the participants were told that some images had a beauty filter applied, and none were given 'before' and 'after' pictures of the same individual.
Lookism: The overlooked bias in computer vision
Gulati, Aditya, Lepri, Bruno, Oliver, Nuria
In recent years, there have been significant advancements in computer vision which have led to the widespread deployment of image recognition and generation systems in socially relevant applications, from hiring to security screening. However, the prevalence of biases within these systems has raised significant ethical and social concerns. The most extensively studied biases in this context are related to gender, race and age. Yet, other biases are equally pervasive and harmful, such as lookism, i.e., the preferential treatment of individuals based on their physical appearance. Lookism remains under-explored in computer vision but can have profound implications not only by perpetuating harmful societal stereotypes but also by undermining the fairness and inclusivity of AI technologies. Thus, this paper advocates for the systematic study of lookism as a critical bias in computer vision models. Through a comprehensive review of existing literature, we identify three areas of intersection between lookism and computer vision. We illustrate them by means of examples and a user study. We call for an interdisciplinary approach to address lookism, urging researchers, developers, and policymakers to prioritize the development of equitable computer vision systems that respect and reflect the diversity of human appearances.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York (0.04)
- Europe > Spain (0.04)
- (2 more...)
- Overview (0.66)
- Questionnaire & Opinion Survey (0.54)
- Research Report (0.40)
- Government (0.66)
- Health & Medicine > Therapeutic Area (0.46)
Racial Bias in the Beautyverse
This short paper proposes a preliminary yet insightful investigation of racial biases in beauty filter techniques currently used on social media. The obtained results are a call to action for researchers in Computer Vision: such biases risk being replicated and exaggerated in the Metaverse and, as a consequence, they deserve more attention from the community.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Spain (0.04)
- Europe > Germany > Lower Saxony > Göttingen (0.04)
OpenFilter: A Framework to Democratize Research Access to Social Media AR Filters
Riccio, Piera, Psomas, Bill, Galati, Francesco, Escolano, Francisco, Hofmann, Thomas, Oliver, Nuria
Augmented Reality (AR) filters on selfies have become very popular on social media platforms for a variety of applications, including marketing, entertainment and aesthetics. Given the wide adoption of AR face filters and the importance of faces in our social structures and relations, there is increased interest by the scientific community in analyzing the impact of such filters from a psychological, artistic and sociological perspective. However, there are few quantitative analyses in this area, mainly due to a lack of publicly available datasets of facial images with applied AR filters. The proprietary, closed nature of most social media platforms does not allow users, scientists and practitioners to access the code and the details of the available AR face filters. Scraping faces from these platforms to collect data is ethically unacceptable and should, therefore, be avoided in research. In this paper, we present OpenFilter, a flexible framework to apply AR filters available on social media platforms to existing large collections of human faces. Moreover, we share FairBeauty and B-LFW, two beautified versions of the publicly available FairFace and LFW datasets, and we outline insights derived from the analysis of these beautified datasets.
- North America > United States > New York > New York County > New York City (0.04)
- Oceania > Australia (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- (3 more...)
- Law (0.93)
- Information Technology > Services (0.68)
- Health & Medicine > Therapeutic Area (0.46)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)
Our favorite stories of 2021
The end of the year is always a good time for a bit of introspection and self-reflection. It also seems right to pause to celebrate some of the high points from a challenging year. We asked our writers and editors to look back over all the stories we published in 2021 and tell us which ones really stood out. Which stories did their colleagues publish that made them proud to work for MIT Technology Review? (And no, they weren't allowed to choose their own.) An edited version of the list runs below, but there was one story that our team kept coming back to as a touchstone for the kind of coverage that we do: Karen Hao's investigation into Facebook.
- North America > United States > California (0.15)
- Europe > Ukraine (0.04)
- Europe > Russia (0.04)
- (3 more...)
Twitter's AI bounty program reveals bias toward young, pretty white people
Twitter's first bounty program for AI bias has wrapped up, and there are already some glaring issues the company wants to address. CNET reports that grad student Bogdan Kulynych has discovered that photo beauty filters skew the Twitter saliency (importance) algorithm's scoring system in favor of slimmer, younger and lighter-skinned (or warmer-toned) people. The findings show that algorithms can "amplify real-world biases" and conventional beauty expectations, Twitter said. Halt AI learned that Twitter's saliency algorithm "perpetuated marginalization" by cropping out the elderly and people with disabilities. Researcher Roya Pakzad, meanwhile, found that the saliency algorithm prefers cropping Latin writing over Arabic.
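The kind of audit described above boils down to comparing the algorithm's saliency (importance) scores across demographic groups. A minimal sketch of that comparison, with purely illustrative scores and group labels (not real Twitter data, and not Twitter's actual saliency model):

```python
from collections import defaultdict

def group_score_disparity(scores, groups):
    """Mean saliency score per group, plus the max-minus-min disparity.

    scores: per-image importance scores produced by some saliency model
    groups: demographic label for each image, parallel to `scores`
    (Both inputs are hypothetical here -- a real audit would obtain
    scores from the model under test on a labeled face dataset.)
    """
    by_group = defaultdict(list)
    for score, group in zip(scores, groups):
        by_group[group].append(score)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    # Disparity: gap between the most- and least-favored groups.
    disparity = max(means.values()) - min(means.values())
    return means, disparity

# Illustrative numbers only.
scores = [0.82, 0.79, 0.64, 0.61, 0.80, 0.66]
groups = ["lighter", "lighter", "darker", "darker", "lighter", "darker"]
means, disparity = group_score_disparity(scores, groups)
```

A nonzero disparity on matched image sets is the signal Kulynych's entry surfaced: the cropping algorithm systematically scored some groups as more "salient" than others.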
Podcast: In the AI of the Beholder
Ideas about what constitutes "beauty" are complex, subjective, and by no means limited to physical appearances. Elusive though it is, everyone wants more of it. That means big business and increasingly, people harnessing algorithms to create their ideal selves in the digital and, sometimes, physical worlds. In this episode, we explore the popularity of beauty filters, and sit down with someone who's convinced his software will show you just how to nip and tuck your way to a better life. This episode was reported by Tate Ryan-Mosley, and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green. Strong: Beauty has always been one of society's greatest obsessions. And for as long as we've worshipped it… we've also found ways to change and enhance it. From makeup and clothes... to airbrushing photos… or a surgical nip and tuck. Strong: You may not realize it...but this technology is right at your fingertips.
- North America > United States > Minnesota (0.05)
- North America > United States > Maryland (0.04)
- Information Technology (0.47)
- Health & Medicine (0.46)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.69)